Analysing Spark data
A list of resources useful for analysing measurements and rewards.
Getting evaluated measurements
We have a Node.js script to dry-run a round evaluation; see bin/dry-run.js in https://github.com/filecoin-station/spark-evaluate.
‼️
Remember to get a Glif API access token from https://api.node.glif.io and store it in the .env file in the spark-evaluate root directory:
❯ echo 'GLIF_TOKEN=<your-token>' >> .env
This script can optionally save the evaluated measurements to a file when you specify the DUMP environment variable. (This is a work in progress; see https://github.com/filecoin-station/spark-evaluate/pull/243.)
❯ DUMP=1 node bin/dry-run.js 7970
(...lots of logs...)
Evaluated measurements saved to measurements-7970-all.csv
You can dump only the measurements for a given miner:
❯ DUMP=f0123 node bin/dry-run.js 7970
(...lots of logs...)
Storing measurements for miner id f0123
Evaluated measurements saved to measurements-7970-f0123.csv
or for a given participant address:
❯ DUMP=0xdead node bin/dry-run.js 7970
(...lots of logs...)
Storing measurements from participant address 0xdead
Evaluated measurements saved to measurements-7970-0xdead.csv
Working with measurements CSV
You can load the CSV file into Excel or Google Sheets. You can also use Unix text-processing tools like cut.
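Before running the pipelines below on a real dump, it can help to try them on a tiny hand-made file. This is a sketch that assumes the same column layout as the commands below (field 4 = IPv4 subnet, field 6 = evaluation result); the sample file name and values are made up for illustration:

```shell
# A tiny hand-made file mimicking the measurements CSV layout
# (field 4 = IPv4 subnet, field 6 = evaluation result)
cat > sample.csv <<'EOF'
a,b,c,10.0.0.0/24,e,OK
a,b,c,10.0.0.0/24,e,OK
a,b,c,10.0.1.0/24,e,TIMEOUT
EOF

# Unique subnets: extract field 4, de-duplicate, count lines
cut -d, -f4 < sample.csv | sort -u | wc -l
```

The same `cut | sort -u | wc -l` idiom works unchanged on the real `measurements-<round>-*.csv` files.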
Count the number of unique IPv4 subnets:
❯ cut -d, -f4 < measurements-7970-0xB63153AE08c1D7b4f0DeE0b0df725F3A9b8CdaAE.csv | sort -u | wc -l
877
Count the number of accepted measurements:
❯ cut -d, -f6 < measurements-7970-0xB63153AE08c1D7b4f0DeE0b0df725F3A9b8CdaAE.csv | grep "^OK$" | wc -l
8120
Count the number of unique IPv4 subnets in accepted measurements:
❯ cut -d, -f4,6 < measurements-7970-0xB63153AE08c1D7b4f0DeE0b0df725F3A9b8CdaAE.csv | grep ",OK$" | cut -d, -f1 | sort -u | wc -l
872
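The last two pipelines can also be combined into a single awk pass. This is a sketch assuming the same column layout (field 4 = IPv4 subnet, field 6 = result); the sample file below is made up so the command is runnable as-is:

```shell
# Build a tiny sample file mimicking the measurements CSV layout
# (field 4 = IPv4 subnet, field 6 = evaluation result)
cat > sample-measurements.csv <<'EOF'
a,b,c,10.0.0.0/24,e,OK
a,b,c,10.0.0.0/24,e,OK
a,b,c,10.0.1.0/24,e,TIMEOUT
a,b,c,10.0.2.0/24,e,OK
EOF

# One pass: count accepted measurements and unique subnets among them
awk -F, '$6 == "OK" { accepted++; if (!($4 in seen)) { seen[$4]; subnets++ } }
         END { print accepted " accepted, " subnets " unique subnets" }' \
  sample-measurements.csv
```

On a real dump, replace the sample file name with the `measurements-<round>-<filter>.csv` file produced by the dry-run script.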